
Applied Scientist: Human and Machine Learning

#artificialintelligence

Amazon's Global Learning & Development (GLD) Learning Science and Engineering team is growing quickly and is looking for a skilled, driven applied scientist to develop solutions and innovate at the intersection of human and machine learning. The Learning Science and Engineering org is reinventing workplace learning by building the programs, products, technologies, and mechanisms that make learning effective and scalable for Amazon employees. We use machine learning to augment human decision-making and to accelerate and personalize learning and skill development. We have a passion for raising the bar on learning and learning design at Amazon while simultaneously contributing to the science of human and machine learning. We partner with multiple businesses across Amazon to explore ways to help Amazon employees grow their knowledge and capabilities.


Who is this Explanation for? Human Intelligence and Knowledge Graphs for eXplainable AI

Celino, Irene

arXiv.org Artificial Intelligence

eXplainable AI focuses on generating explanations for the output of an AI algorithm to a user, usually a decision-maker. Such a user needs to interpret the AI system in order to decide whether to trust the machine outcome. When addressing this challenge, therefore, proper attention should be given to producing explanations that are interpretable by the target community of users. In this chapter, we argue for the need to better investigate what constitutes a human explanation, i.e. a justification of the machine behaviour that is interpretable and actionable by human decision-makers. In particular, we focus on the contributions that Human Intelligence can bring to eXplainable AI, especially in conjunction with the exploitation of Knowledge Graphs. Indeed, we call for a better interplay between Knowledge Representation and Reasoning, Social Sciences, Human Computation and Human-Machine Cooperation research -- as already explored in other AI branches -- in order to support the goal of eXplainable AI with the adoption of a Human-in-the-Loop approach.


Disco: Workshop on Human and Machine Learning in Games

Krause, Markus (Leibniz University) | Bry, François (Ludwig-Maximilians University) | Georgescu, Mihai (Leibniz University)

AAAI Conferences

Exploiting the playfulness of games has been extremely successful in bringing humans "in the loop" to solve complex computational tasks that would otherwise be hardly tractable. Although many proposals and systems following this paradigm have been developed, deployed, and tested, the relationship between play and human computation still deserves more investigation. Most work in human computation focuses on the ability of the machine to exploit, or learn from, humans. The workshop has a slightly different focus: the exploration of extending "I learn" ("disco" in Latin) to machines and humans alike. Games hold tremendous potential for discovery related to human and machine computation because of the intrinsic relation between play and learning. Extending and building upon the focus of past workshops on games and human computation, Disco aims at exploring the intersection of entertainment, learning, and human computation.